source('../env.R')
community_data = read_csv(filename(COMMUNITY_OUTPUT_DIR, 'community_assembly_metrics_using_relative_abundance.csv'))
Rows: 341 Columns: 43
Delimiter: ","
dbl (43): mntd_normalised, mntd_actual, mntd_min, mntd_max, mntd_mean, mntd_sd, fdiv_normalised, fdiv_actual, fdiv_min, fdiv_max, fdiv_...
head(community_data)
colnames(community_data)
[1] "mntd_normalised" "mntd_actual" "mntd_min"
[4] "mntd_max" "mntd_mean" "mntd_sd"
[7] "fdiv_normalised" "fdiv_actual" "fdiv_min"
[10] "fdiv_max" "fdiv_mean" "fdiv_sd"
[13] "mass_fdiv_normalised" "mass_fdiv_actual" "mass_fdiv_min"
[16] "mass_fdiv_max" "mass_fdiv_mean" "mass_fdiv_sd"
[19] "locomotory_trait_fdiv_normalised" "locomotory_trait_fdiv_actual" "locomotory_trait_fdiv_min"
[22] "locomotory_trait_fdiv_max" "locomotory_trait_fdiv_mean" "locomotory_trait_fdiv_sd"
[25] "trophic_trait_fdiv_normalised" "trophic_trait_fdiv_actual" "trophic_trait_fdiv_min"
[28] "trophic_trait_fdiv_max" "trophic_trait_fdiv_mean" "trophic_trait_fdiv_sd"
[31] "gape_width_fdiv_normalised" "gape_width_fdiv_actual" "gape_width_fdiv_min"
[34] "gape_width_fdiv_max" "gape_width_fdiv_mean" "gape_width_fdiv_sd"
[37] "handwing_index_fdiv_normalised" "handwing_index_fdiv_actual" "handwing_index_fdiv_min"
[40] "handwing_index_fdiv_max" "handwing_index_fdiv_mean" "handwing_index_fdiv_sd"
[43] "city_id"
Join on realms
city_to_realm = read_csv(filename(CITY_DATA_OUTPUT_DIR, 'realms.csv'))
Rows: 342 Columns: 2
Delimiter: ","
chr (1): core_realm
dbl (1): city_id
community_data_with_realm = left_join(community_data, city_to_realm)
Joining with `by = join_by(city_id)`
Cities as points
city_points = st_centroid(read_sf(filename(CITY_DATA_OUTPUT_DIR, 'city_selection.shp'))) %>% left_join(community_data_with_realm)
Warning: st_centroid assumes attributes are constant over geometries
Warning: st_centroid does not give correct centroids for longitude/latitude data
Joining with `by = join_by(city_id)`
city_points_coords = st_coordinates(city_points)
# st_coordinates() returns X (longitude) in column 1 and Y (latitude) in column 2
city_points$longitude = city_points_coords[,1]
city_points$latitude = city_points_coords[,2]
world_map = read_country_boundaries()
Error: Cannot open "/Users/james/Dropbox/PhD/WorldBank_countries_Admin0_10m/WB_countries_Admin0_10m.shp"; The file doesn't seem to exist.
Load community data and create a long-format version
communities = read_csv(filename(COMMUNITY_OUTPUT_DIR, 'communities_for_analysis.csv'))
Rows: 2462 Columns: 12
Delimiter: ","
chr (5): city_name, jetz_species_name, seasonal, presence, origin
dbl (4): city_id, distance_to_northern_edge_km, distance_to_southern_edge_km, relative_abundance_proxy
lgl (3): present_urban_high, present_urban_med, present_urban_low
communities
community_summary = communities %>% group_by(city_id) %>% summarise(regional_pool_size = n(), urban_pool_size = sum(relative_abundance_proxy > 0))
community_summary
Load trait data
traits = read_csv(filename(TAXONOMY_OUTPUT_DIR, 'traits_jetz.csv'))
Rows: 304 Columns: 6
Delimiter: ","
chr (1): jetz_species_name
dbl (5): gape_width, trophic_trait, locomotory_trait, mass, handwing_index
head(traits)
Load realm geo
test_required_values = function(name, df) {
cat(paste(
test_value_wilcox(paste(name, 'MNTD'), df$mntd_normalised),
test_value_wilcox(paste(name, 'Beak Gape FDiv'), df$gape_width_fdiv_normalised),
test_value_wilcox(paste(name, 'HWI FDiv'), df$handwing_index_fdiv_normalised),
test_value_wilcox(paste(name, 'Mass FDiv'), df$mass_fdiv_normalised),
nrow(df),
sep = "\n"))
}
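`test_value_wilcox()` is defined in `env.R` and does not appear in this log. Judging from the printed output (a labelled median plus significance stars), a minimal sketch, assuming it runs a one-sample Wilcoxon signed-rank test against mu = 0.5 and appends stars by p-value threshold, might look like:

```r
# Hypothetical reconstruction of env.R's test_value_wilcox(): tests whether
# the normalised values differ from the null expectation of 0.5, and reports
# the median with conventional significance stars.
test_value_wilcox <- function(name, values) {
  values <- values[!is.na(values)]
  p <- wilcox.test(values, mu = 0.5)$p.value
  stars <- if (p < 0.001) ' ***' else if (p < 0.01) ' **' else if (p < 0.05) ' *' else ''
  paste0(name, ' median ', round(median(values), 2), stars)
}
```

The exact star thresholds and rounding are assumptions; only the output format is taken from the log.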
test_required_values('Global', community_data_with_realm)
Global MNTD median 0.49
Global Beak Gape FDiv median 0.59 ***
Global HWI FDiv median 0.66 ***
Global Mass FDiv median 0.65 ***
341
unique(community_data_with_realm$core_realm)
[1] "Oceania" "Nearctic" "Neotropic" "Palearctic" "Afrotropic" "Indomalayan" "Australasia"
test_required_values('Nearctic', community_data_with_realm[community_data_with_realm$core_realm == 'Nearctic',])
Warning: cannot compute exact p-value with ties
Warning: cannot compute exact p-value with ties
Nearctic MNTD median 0.65
Nearctic Beak Gape FDiv median 0.63
Nearctic HWI FDiv median 0.23 **
Nearctic Mass FDiv median 0.48
66
test_required_values('Neotropic', community_data_with_realm[community_data_with_realm$core_realm == 'Neotropic',])
Neotropic MNTD median 0.53
Neotropic Beak Gape FDiv median 0.46
Neotropic HWI FDiv median 0.51
Neotropic Mass FDiv median 0.65 ***
64
test_required_values('Palearctic', community_data_with_realm[community_data_with_realm$core_realm == 'Palearctic',])
Palearctic MNTD median 0.59 *
Palearctic Beak Gape FDiv median 0.93 ***
Palearctic HWI FDiv median 0.4
Palearctic Mass FDiv median 0.55
74
test_required_values('Afrotropic', community_data_with_realm[community_data_with_realm$core_realm == 'Afrotropic',])
Afrotropic MNTD median 0.17 *
Afrotropic Beak Gape FDiv median 0.4
Afrotropic HWI FDiv median 0.54
Afrotropic Mass FDiv median 0.34
9
test_required_values('Indomalayan', community_data_with_realm[community_data_with_realm$core_realm == 'Indomalayan',])
Indomalayan MNTD median 0.43 ***
Indomalayan Beak Gape FDiv median 0.47
Indomalayan HWI FDiv median 0.88 ***
Indomalayan Mass FDiv median 0.81 ***
121
test_required_values('Australasia', community_data_with_realm[community_data_with_realm$core_realm == 'Australasia',])
Australasia MNTD median 0.53
Australasia Beak Gape FDiv median 0.46
Australasia HWI FDiv median 0.79
Australasia Mass FDiv median 0.55
6
communities = read_csv(filename(COMMUNITY_OUTPUT_DIR, 'communities_for_analysis.csv'))
Rows: 2462 Columns: 12
Delimiter: ","
chr (5): city_name, jetz_species_name, seasonal, presence, origin
dbl (4): city_id, distance_to_northern_edge_km, distance_to_southern_edge_km, relative_abundance_proxy
lgl (3): present_urban_high, present_urban_med, present_urban_low
city_introduced_species = communities %>% group_by(city_id) %>% summarise(number_of_species = n()) %>% left_join(
communities %>% group_by(city_id) %>% filter(origin == 'Introduced') %>% summarise(number_of_introduced_species = n())
) %>% replace_na(list(number_of_introduced_species = 0))
Joining with `by = join_by(city_id)`
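The count-then-join above works, but the same table can be built in a single `summarise()` pass, avoiding the `left_join()` and `replace_na()` steps entirely. A sketch using a hypothetical toy `communities_demo` frame in place of the real data:

```r
library(dplyr)

# Toy stand-in for the `communities` table (hypothetical values):
communities_demo <- data.frame(
  city_id = c(1, 1, 1, 2, 2),
  origin  = c('Native', 'Introduced', 'Native', 'Native', 'Native')
)

# Single pass: sum() over a logical counts the introduced species directly,
# so cities with none get 0 rather than NA.
city_introduced_species_alt <- communities_demo %>%
  group_by(city_id) %>%
  summarise(
    number_of_species = n(),
    number_of_introduced_species = sum(origin == 'Introduced', na.rm = TRUE)
  )
```

Substituting the real `communities` frame gives the same result as the join-based version.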
community_data_with_introductions = left_join(community_data, city_introduced_species)
Joining with `by = join_by(city_id)`
community_data_with_introductions$has_introduced_species = community_data_with_introductions$number_of_introduced_species > 0
community_data_with_introductions
community_data_with_introductions[,c('mntd_normalised', 'has_introduced_species')]
community_data_with_introductions %>% group_by(has_introduced_species) %>% summarise(
mean_mntd_normalised = mean(mntd_normalised, na.rm = T),
sd_mntd_normalised = sd(mntd_normalised, na.rm = T),
mean_mass_fdiv_normalised = mean(mass_fdiv_normalised, na.rm = T),
sd_mass_fdiv_normalised = sd(mass_fdiv_normalised, na.rm = T),
mean_gape_width_fdiv_normalised = mean(gape_width_fdiv_normalised, na.rm = T),
sd_gape_width_fdiv_normalised = sd(gape_width_fdiv_normalised, na.rm = T),
mean_handwing_index_fdiv_normalised = mean(handwing_index_fdiv_normalised, na.rm = T),
sd_handwing_index_fdiv_normalised = sd(handwing_index_fdiv_normalised, na.rm = T)
)
ggplot(community_data_with_introductions, aes(x = has_introduced_species, y = mntd_normalised)) + geom_boxplot()
wilcox.test(mntd_normalised ~ has_introduced_species, community_data_with_introductions, na.action = 'na.omit')
Wilcoxon rank sum test with continuity correction
data: mntd_normalised by has_introduced_species
W = 10956, p-value = 0.02272
alternative hypothesis: true location shift is not equal to 0
There is a significant difference between the response of cities with introduced species (0.53±0.27) and those without (0.47±0.19) (p-value = 0.02).
ggplot(community_data_with_introductions, aes(x = has_introduced_species, y = mass_fdiv_normalised)) + geom_boxplot()
wilcox.test(mass_fdiv_normalised ~ has_introduced_species, community_data_with_introductions, na.action = 'na.omit')
Wilcoxon rank sum test with continuity correction
data: mass_fdiv_normalised by has_introduced_species
W = 17286, p-value = 0.00000004887
alternative hypothesis: true location shift is not equal to 0
There is a significant difference between the response of cities with introduced species (0.57±0.27) and those without (0.73±0.24) (p < 0.0001).
ggplot(community_data_with_introductions, aes(x = has_introduced_species, y = gape_width_fdiv_normalised)) + geom_boxplot()
wilcox.test(gape_width_fdiv_normalised ~ has_introduced_species, community_data_with_introductions, na.action = 'na.omit')
Wilcoxon rank sum test with continuity correction
data: gape_width_fdiv_normalised by has_introduced_species
W = 10990, p-value = 0.1178
alternative hypothesis: true location shift is not equal to 0
There is NOT a significant difference between the response of cities with introduced species (0.61±0.30) and those without (0.56±0.27) (p = 0.12).
ggplot(community_data_with_introductions, aes(x = has_introduced_species, y = handwing_index_fdiv_normalised)) + geom_boxplot()
wilcox.test(handwing_index_fdiv_normalised ~ has_introduced_species, community_data_with_introductions, na.action = 'na.omit')
Wilcoxon rank sum test with continuity correction
data: handwing_index_fdiv_normalised by has_introduced_species
W = 19410, p-value < 0.00000000000000022
alternative hypothesis: true location shift is not equal to 0
There is a significant difference between the response of cities with introduced species (0.49±0.30) and those without (0.79±0.21) (p < 0.0001).
geography = read_csv(filename(CITY_DATA_OUTPUT_DIR, 'geography.csv'))
Rows: 342 Columns: 26
Delimiter: ","
dbl (26): city_id, city_avg_ndvi, city_avg_elevation, city_avg_temp, city_avg_min_monthly_temp, city_avg_max_monthly_temp, city_avg_mon...
names(geography)
[1] "city_id" "city_avg_ndvi" "city_avg_elevation" "city_avg_temp"
[5] "city_avg_min_monthly_temp" "city_avg_max_monthly_temp" "city_avg_monthly_temp" "city_avg_rainfall"
[9] "city_avg_max_monthly_rainfall" "city_avg_min_monthly_rainfall" "city_avg_soil_moisture" "city_max_elev"
[13] "city_min_elev" "city_elev_range" "region_20km_avg_ndvi" "region_20km_avg_elevation"
[17] "region_20km_avg_soil_moisture" "region_20km_max_elev" "region_20km_min_elev" "region_20km_elev_range"
[21] "region_50km_avg_ndvi" "region_50km_avg_elevation" "region_50km_avg_soil_moisture" "region_50km_max_elev"
[25] "region_50km_min_elev" "region_50km_elev_range"
analysis_data = community_data_with_realm[,c('city_id', 'mntd_normalised', 'mass_fdiv_normalised', 'gape_width_fdiv_normalised', 'handwing_index_fdiv_normalised', 'core_realm')] %>%
left_join(city_points[,c('city_id', 'latitude', 'longitude')]) %>%
left_join(community_data_with_introductions[,c('city_id', 'has_introduced_species')]) %>%
left_join(geography)
Joining with `by = join_by(city_id)`
Joining with `by = join_by(city_id)`
Joining with `by = join_by(city_id)`
analysis_data$abs_latitude = abs(analysis_data$latitude)
analysis_data$core_realm = factor(analysis_data$core_realm, levels = c('Palearctic', 'Nearctic', 'Neotropic', 'Afrotropic', 'Indomalayan', 'Australasia', 'Oceania'))
analysis_data$has_introduced_species = factor(analysis_data$has_introduced_species, levels = c('TRUE', 'FALSE'), labels = c('Introduced species', 'No introduced species'))
model_data = function(df, dependent_var) {
  df[, c(dependent_var, 'core_realm', 'abs_latitude', 'latitude', 'longitude', 'has_introduced_species',
         'city_avg_ndvi', 'city_avg_elevation', 'city_avg_temp', 'city_avg_min_monthly_temp',
         'city_avg_max_monthly_temp', 'city_avg_monthly_temp', 'city_avg_rainfall',
         'city_avg_max_monthly_rainfall', 'city_avg_min_monthly_rainfall', 'city_avg_soil_moisture',
         'city_max_elev', 'city_min_elev', 'city_elev_range',
         'region_20km_avg_ndvi', 'region_20km_avg_elevation', 'region_20km_avg_soil_moisture',
         'region_20km_max_elev', 'region_20km_min_elev', 'region_20km_elev_range',
         'region_50km_avg_ndvi', 'region_50km_avg_elevation', 'region_50km_avg_soil_moisture',
         'region_50km_max_elev', 'region_50km_min_elev', 'region_50km_elev_range')]
}
model_data(analysis_data, 'mntd_normalised')
names(analysis_data)
[1] "city_id" "mntd_normalised" "mass_fdiv_normalised" "gape_width_fdiv_normalised"
[5] "handwing_index_fdiv_normalised" "core_realm" "latitude" "longitude"
[9] "geometry" "has_introduced_species" "city_avg_ndvi" "city_avg_elevation"
[13] "city_avg_temp" "city_avg_min_monthly_temp" "city_avg_max_monthly_temp" "city_avg_monthly_temp"
[17] "city_avg_rainfall" "city_avg_max_monthly_rainfall" "city_avg_min_monthly_rainfall" "city_avg_soil_moisture"
[21] "city_max_elev" "city_min_elev" "city_elev_range" "region_20km_avg_ndvi"
[25] "region_20km_avg_elevation" "region_20km_avg_soil_moisture" "region_20km_max_elev" "region_20km_min_elev"
[29] "region_20km_elev_range" "region_50km_avg_ndvi" "region_50km_avg_elevation" "region_50km_avg_soil_moisture"
[33] "region_50km_max_elev" "region_50km_min_elev" "region_50km_elev_range" "abs_latitude"
all_explanatories = c(
'abs_latitude', 'latitude', 'longitude',
'has_introduced_species',
'city_avg_ndvi', 'city_avg_elevation', 'city_avg_temp', 'city_avg_min_monthly_temp', 'city_avg_max_monthly_temp',
'city_avg_monthly_temp', 'city_avg_rainfall', 'city_avg_max_monthly_rainfall', 'city_avg_min_monthly_rainfall',
'city_avg_soil_moisture', 'city_max_elev', 'city_min_elev', 'city_elev_range',
'region_20km_avg_ndvi', 'region_20km_avg_elevation', 'region_20km_avg_soil_moisture', 'region_20km_max_elev',
'region_20km_min_elev', 'region_20km_elev_range',
'region_50km_avg_ndvi', 'region_50km_avg_elevation', 'region_50km_avg_soil_moisture', 'region_50km_max_elev',
'region_50km_min_elev', 'region_50km_elev_range',
'core_realmAfrotropic', 'core_realmAustralasia', 'core_realmIndomalayan', 'core_realmNearctic', 'core_realmNeotropic', 'core_realmPalearctic')
type_labels = function(p) {
explanatory_levels = all_explanatories[all_explanatories %in% p$explanatory]
p$explanatory <- factor(p$explanatory, levels = explanatory_levels)
p$type <- 'Realm'
p$type[p$explanatory %in% c('city_avg_ndvi', 'city_avg_elevation', 'city_avg_temp', 'city_avg_min_monthly_temp', 'city_avg_max_monthly_temp',
'city_avg_monthly_temp', 'city_avg_rainfall', 'city_avg_max_monthly_rainfall', 'city_avg_min_monthly_rainfall',
'city_avg_soil_moisture', 'city_max_elev', 'city_min_elev', 'city_elev_range')] <- 'City geography'
p$type[p$explanatory %in% c('region_50km_avg_ndvi', 'region_50km_avg_elevation', 'region_50km_avg_soil_moisture', 'region_50km_max_elev',
'region_50km_min_elev', 'region_50km_elev_range')] <- 'Regional (50km) geography'
p$type[p$explanatory %in% c('region_20km_avg_ndvi', 'region_20km_avg_elevation', 'region_20km_avg_soil_moisture', 'region_20km_max_elev',
'region_20km_min_elev', 'region_20km_elev_range')] <- 'Regional (20km) geography'
p$type[p$explanatory %in% c('abs_latitude', 'latitude', 'longitude')] <- 'Spatial'
p
}
explanatory_labels = c(
'has_introduced_species'='Has introduced species',
'city_avg_ndvi'='Average NDVI',
'city_avg_elevation'='Average elevation',
'city_avg_temp'='Average temperature',
'city_avg_min_monthly_temp'='Average minimum monthly temperature',
'city_avg_max_monthly_temp'='Average maximum monthly temperature',
'city_avg_monthly_temp'='Average monthly temperature',
'city_avg_rainfall'='Average rainfall',
'city_avg_max_monthly_rainfall'='Average maximum monthly rainfall',
'city_avg_min_monthly_rainfall'='Average minimum monthly rainfall',
'city_avg_soil_moisture'='Average soil moisture',
'city_max_elev'='Maximum elevation',
'city_min_elev'='Minimum elevation',
'city_elev_range'='Elevation range',
'region_20km_avg_ndvi'='Average NDVI',
'region_20km_avg_elevation'='Average elevation',
'region_20km_avg_soil_moisture'='Average soil moisture',
'region_20km_max_elev'='Maximum elevation',
'region_20km_min_elev'='Minimum elevation',
'region_20km_elev_range'='Elevation range',
'region_50km_avg_ndvi'='Average NDVI',
'region_50km_avg_elevation'='Average elevation',
'region_50km_avg_soil_moisture'='Average soil moisture',
'region_50km_max_elev'='Maximum elevation',
'region_50km_min_elev'='Minimum elevation',
'region_50km_elev_range'='Elevation range',
'abs_latitude' = 'Absolute latitude',
'latitude' = 'Latitude',
'longitude' = 'Longitude',
'core_realmAfrotropic' = 'Afrotropical',
'core_realmAustralasia' = 'Australasian',
'core_realmIndomalayan' = 'Indomalayan',
'core_realmNearctic' = 'Nearctic',
'core_realmNeotropic' = 'Neotropical',
'core_realmPalearctic' = 'Palearctic',
'core_realmOceania' = 'Oceanian')
geom_normalised_histogram = function(name, gg, legend.position = "bottom") {
gg +
geom_histogram(aes(fill = core_realm), binwidth = 0.1, position = "dodge") +
geom_vline(aes(xintercept = 0.5), color = "#000000", linewidth = 0.4) +
geom_vline(aes(xintercept = 0), color = "#000000", linewidth = 0.2, linetype = "dashed") +
geom_vline(aes(xintercept = 1), color = "#000000", linewidth = 0.2, linetype = "dashed") +
ylab("Number of cities") + xlab("Normalised Response") + ylim(c(0, 70)) +
labs(title = name, fill = 'Realm') +
theme_bw() +
theme(legend.position=legend.position)
}
geom_map = function(map_sf, title) {
  ggplot() +
    geom_sf(data = world_map, aes(geometry = geometry)) +
    map_sf +
    normalised_colours_scale +
    labs(title = title, colour = 'Normalised\nResponse') +
    theme_bw() +
    theme(legend.position = "bottom")
}
# Taken from the MuMIn package
# https://rdrr.io/cran/MuMIn/src/R/averaging.R
# https://rdrr.io/cran/MuMIn/src/R/model.avg.R
.coefarr.avg <-
function(cfarr, weight, revised.var, full, alpha) {
weight <- weight / sum(weight)
nCoef <- dim(cfarr)[3L]
if(full) {
nas <- is.na(cfarr[, 1L, ]) & is.na(cfarr[, 2L, ])
cfarr[, 1L, ][nas] <- cfarr[, 2L, ][nas] <- 0
#cfarr[, 1L:2L, ][is.na(cfarr[, 1L:2L, ])] <- 0
if(!all(is.na(cfarr[, 3L, ])))
cfarr[ ,3L, ][is.na(cfarr[ , 3L, ])] <- Inf
}
avgcoef <- array(dim = c(nCoef, 5L),
dimnames = list(dimnames(cfarr)[[3L]], c("Estimate",
"Std. Error", "Adjusted SE", "Lower CI", "Upper CI")))
for(i in seq_len(nCoef))
avgcoef[i, ] <- par.avg(cfarr[, 1L, i], cfarr[, 2L, i], weight,
df = cfarr[, 3L, i], alpha = alpha, revised.var = revised.var)
avgcoef[is.nan(avgcoef)] <- NA
return(avgcoef)
}
.makecoefmat <- function(cf) {
no.ase <- all(is.na(cf[, 3L]))
z <- abs(cf[, 1L] / cf[, if(no.ase) 2L else 3L])
pval <- 2 * pnorm(z, lower.tail = FALSE)
cbind(cf[, if(no.ase) 1L:2L else 1L:3L, drop = FALSE],
`z value` = z, `Pr(>|z|)` = zapsmall(pval))
}
# Generate model selections using lm, dredge, and model.avg
# `formula` : a two-sided formula object describing the model's response and predictors
# `data` : the data frame containing the variables from the formula
# `aic_delta` : the AIC delta to use for selecting models in the model average
model_average <- function(formula, data, aic_delta = 20) {
model <- lm(
formula,
data=data
)
dredge_result <- dredge(model)
summary(model.avg(dredge_result, subset = delta < aic_delta))
}
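One caveat worth noting: per the MuMIn documentation, `dredge()` requires the global model to be fitted with `na.action = na.fail`, so that every candidate submodel is fitted to identical rows; with the default `na.omit` it stops with an error. The data here is already filtered for missing values, but a defensive one-liner before calling `model_average()` makes the requirement explicit:

```r
# Force model fitting to fail loudly on missing data, as MuMIn's dredge()
# expects; this prevents submodels from being silently fitted to different
# subsets of rows.
options(na.action = "na.fail")
```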
# Create a summary data frame containing the selected variables from a model
# `model_sum` : The model summary output from `model_average`
model_summary <- function(model_sum) {
  weight <- model_sum$msTable[, 5L]
  revised <- attr(model_sum, "revised.var")
  # Full-average coefficients (terms absent from a submodel counted as zero)
  coefmat.full <- as.data.frame(.makecoefmat(.coefarr.avg(
    model_sum$coefArray, weight, revised, TRUE, 0.05)))
  # Subset (conditional) average coefficients
  coefmat.subset <- as.data.frame(.makecoefmat(.coefarr.avg(
    model_sum$coefArray, weight, revised, FALSE, 0.05)))
  # Keep the estimate, standard error, and p value; drop the intercept row
  clean <- function(cf, model_name) {
    cf <- cf[-1L, c(1L, 2L, 5L)]
    names(cf) <- c("estimate", "error", "p")
    cf <- tibble::rownames_to_column(cf, "explanatory")
    cf$model <- model_name
    cf
  }
  rbind(clean(coefmat.full, "full"), clean(coefmat.subset, "subset"))
}
formula_from_vsurp = function(predictors, dependent, vsurp_result) {
as.formula(paste(dependent, paste(names(predictors[,vsurp_result$varselect.interp]), collapse="+"), sep = "~"))
}
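`formula_from_vsurp()` builds the model formula by pasting the VSURF-selected predictor names together. A hypothetical worked example, mocking the VSURF result with a plain list (names and values invented for illustration):

```r
# Mock inputs: a predictor frame and a stand-in for VSURF's output, where
# varselect.interp holds the column indices chosen at the interpretation step.
predictors_demo <- data.frame(a = 1, b = 2, c = 3)
vsurf_demo <- list(varselect.interp = c(2, 3))

# The same paste() construction as formula_from_vsurp(), applied to the mocks:
f <- as.formula(paste('y',
                      paste(names(predictors_demo[, vsurf_demo$varselect.interp]),
                            collapse = "+"),
                      sep = "~"))
# f is the formula y ~ b + c
```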
plot_vsurp_result = function(result_table) {
plot = result_table[result_table$model == 'full',]
plot = type_labels(plot)
ggplot(plot, aes(y=explanatory, x=estimate, colour = type)) +
geom_line() +
geom_point()+
geom_errorbar(aes(xmin=estimate-error, xmax=estimate+error), width=.2,
position=position_dodge(0.05)) +
scale_y_discrete(
limits = rev(levels(plot$explanatory)),
labels = explanatory_labels) +
theme_bw() +
geom_vline(xintercept=0, linetype="dotted") +
guides(colour=guide_legend(title="Predictor type")) + xlab('Increase from 0 (habitat filtering)\nto 1 (competitive exclusion)\n± Standard Error') + ylab('Predictor') +
theme(legend.justification = "top")
}
norm_mntd_analysis_plot = geom_normalised_histogram(
'MNTD',
ggplot(analysis_data, aes(mntd_normalised))
)
norm_mntd_analysis_plot
norm_mntd_analysis_geo_plot = geom_map(geom_sf(data = analysis_data, aes(color = mntd_normalised, geometry = geometry)), 'MNTD')
norm_mntd_analysis_geo_plot
ggsave(filename(FIGURES_OUTPUT_DIR, 'normalised_mntd_using_abundance.jpg'), width = 2500, height=1100, units = 'px')
norm_mntd_analysis_data = model_data(analysis_data[!is.na(analysis_data$mntd_normalised),], 'mntd_normalised')
norm_mntd_analysis_predictors = norm_mntd_analysis_data[,-1]
norm_mntd_analysis_interp = VSURF(x = norm_mntd_analysis_predictors, y = norm_mntd_analysis_data$mntd_normalised)
Thresholding step
Estimated computational time (on one core): 18 sec.
|
| | 0%
|
|====== | 5%
|
|============= | 10%
|
|=================== | 15%
|
|========================== | 20%
|
|================================ | 25%
|
|======================================= | 30%
|
|============================================= | 35%
|
|==================================================== | 40%
|
|========================================================== | 45%
|
|================================================================ | 50%
|
|======================================================================= | 55%
|
|============================================================================= | 60%
|
|==================================================================================== | 65%
|
|========================================================================================== | 70%
|
|================================================================================================= | 75%
|
|======================================================================================================= | 80%
|
|============================================================================================================== | 85%
|
|==================================================================================================================== | 90%
|
|=========================================================================================================================== | 95%
|
|=================================================================================================================================| 100%
Interpretation step (on 29 variables)
Estimated computational time (on one core): between 7 sec. and 31.3 sec.
|
| | 0%
|
|==== | 3%
|
|========= | 7%
|
|============= | 10%
|
|================== | 14%
|
|====================== | 17%
|
|=========================== | 21%
|
|=============================== | 24%
|
|==================================== | 28%
|
|======================================== | 31%
|
|============================================ | 34%
|
|================================================= | 38%
|
|===================================================== | 41%
|
|========================================================== | 45%
|
|============================================================== | 48%
|
|=================================================================== | 52%
|
|======================================================================= | 55%
|
|============================================================================ | 59%
|
|================================================================================ | 62%
|
|===================================================================================== | 66%
|
|========================================================================================= | 69%
|
|============================================================================================= | 72%
|
|================================================================================================== | 76%
|
|====================================================================================================== | 79%
|
|=========================================================================================================== | 83%
|
|=============================================================================================================== | 86%
|
|==================================================================================================================== | 90%
|
|======================================================================================================================== | 93%
|
|============================================================================================================================= | 97%
|
|=================================================================================================================================| 100%
Prediction step (on 8 variables)
Maximum estimated computational time (on one core): 2.6 sec.
names(norm_mntd_analysis_predictors[,norm_mntd_analysis_interp$varselect.interp])
[1] "city_avg_monthly_temp" "city_avg_max_monthly_temp" "core_realm" "longitude"
[5] "city_avg_temp" "latitude" "abs_latitude" "city_avg_min_monthly_temp"
norm_mntd_analysis_formula = formula_from_vsurp(norm_mntd_analysis_predictors, "mntd_normalised", norm_mntd_analysis_interp)
norm_mntd_analysis_result <- model_average(norm_mntd_analysis_formula, norm_mntd_analysis_data)
Fixed term is "(Intercept)"
norm_mntd_analysis_result_table = model_summary(norm_mntd_analysis_result)
norm_mntd_analysis_result_table
norm_mntd_analysis_pred_plot = plot_vsurp_result(norm_mntd_analysis_result_table)
norm_mntd_analysis_pred_plot
norm_gape_fdiv_analysis_plot = geom_normalised_histogram(
'Gape FDiv',
ggplot(analysis_data, aes(gape_width_fdiv_normalised))
)
norm_gape_fdiv_analysis_plot
norm_gape_fdiv_analysis_geo_plot = geom_map(geom_sf(data = analysis_data, aes(color = gape_width_fdiv_normalised, geometry = geometry)), 'Gape Width FDiv')
norm_gape_fdiv_analysis_geo_plot
ggsave(filename(FIGURES_OUTPUT_DIR, 'normalised_gape_width_using_abundance.jpg'), width = 2500, height = 1100, units = 'px')
norm_gape_fdiv_analysis_data = model_data(analysis_data[!is.na(analysis_data$gape_width_fdiv_normalised),], 'gape_width_fdiv_normalised')
norm_gape_fdiv_analysis_predictors = norm_gape_fdiv_analysis_data[,-1]
norm_gape_fdiv_analysis_interp = VSURF(x = norm_gape_fdiv_analysis_predictors, y = norm_gape_fdiv_analysis_data$gape_width_fdiv_normalised)
Thresholding step
Estimated computational time (on one core): 16.1 sec.
Interpretation step (on 30 variables)
Estimated computational time (on one core): between 6.3 sec. and 33.3 sec.
Prediction step (on 11 variables)
Maximum estimated computational time (on one core): 4.5 sec.
names(norm_gape_fdiv_analysis_predictors[,norm_gape_fdiv_analysis_interp$varselect.interp])
[1] "longitude" "core_realm" "abs_latitude" "city_avg_max_monthly_temp"
[5] "city_avg_temp" "city_avg_min_monthly_temp" "city_avg_max_monthly_rainfall" "city_avg_min_monthly_rainfall"
[9] "latitude" "city_avg_monthly_temp" "city_max_elev"
norm_gape_fdiv_analysis_formula = formula_from_vsurp(norm_gape_fdiv_analysis_predictors, "gape_width_fdiv_normalised", norm_gape_fdiv_analysis_interp)
norm_gape_fdiv_analysis_result <- model_average(norm_gape_fdiv_analysis_formula, norm_gape_fdiv_analysis_data)
Fixed term is "(Intercept)"
norm_gape_fdiv_analysis_result_table = model_summary(norm_gape_fdiv_analysis_result)
norm_gape_fdiv_analysis_result_table
norm_gape_fdiv_analysis_pred_plot = plot_vsurp_result(norm_gape_fdiv_analysis_result_table)
norm_gape_fdiv_analysis_pred_plot
norm_hwi_fdiv_loco_analysis_plot = geom_normalised_histogram(
'HWI FDiv',
ggplot(analysis_data, aes(handwing_index_fdiv_normalised))
)
norm_hwi_fdiv_loco_analysis_plot
norm_hwi_fdiv_analysis_geo_plot = geom_map(geom_sf(data = analysis_data, aes(color = handwing_index_fdiv_normalised, geometry = geometry)), 'HWI FDiv')
norm_hwi_fdiv_analysis_geo_plot
ggsave(filename(FIGURES_OUTPUT_DIR, 'normalised_hwi_using_abundance.jpg'), width = 2500, height = 1100, units = 'px')
norm_hwi_fdiv_analysis_data = model_data(analysis_data[!is.na(analysis_data$handwing_index_fdiv_normalised),], 'handwing_index_fdiv_normalised')
norm_hwi_fdiv_analysis_predictors = norm_hwi_fdiv_analysis_data[,-1]
norm_hwi_fdiv_analysis_interp = VSURF(x = norm_hwi_fdiv_analysis_predictors, y = norm_hwi_fdiv_analysis_data$handwing_index_fdiv_normalised)
Thresholding step
Estimated computational time (on one core): 14.2 sec.
Interpretation step (on 30 variables)
Estimated computational time (on one core): between 6 sec. and 29.1 sec.
Prediction step (on 8 variables)
Maximum estimated computational time (on one core): 2.3 sec.
names(norm_hwi_fdiv_analysis_predictors[,norm_hwi_fdiv_analysis_interp$varselect.interp])
[1] "latitude" "core_realm" "city_avg_max_monthly_temp" "city_avg_temp"
[5] "longitude" "city_avg_max_monthly_rainfall" "region_50km_elev_range" "region_50km_min_elev"
norm_hwi_fdiv_analysis_formula = formula_from_vsurp(norm_hwi_fdiv_analysis_predictors, "handwing_index_fdiv_normalised", norm_hwi_fdiv_analysis_interp)
norm_hwi_fdiv_analysis_result <- model_average(norm_hwi_fdiv_analysis_formula, norm_hwi_fdiv_analysis_data)
Fixed term is "(Intercept)"
norm_hwi_fdiv_analysis_result_table = model_summary(norm_hwi_fdiv_analysis_result)
norm_hwi_fdiv_analysis_result_table
norm_hwi_fdiv_analysis_pred_plot = plot_vsurp_result(norm_hwi_fdiv_analysis_result_table)
norm_hwi_fdiv_analysis_pred_plot
norm_mass_fdiv_loco_analysis_plot = geom_normalised_histogram(
'Mass FDiv',
ggplot(analysis_data, aes(mass_fdiv_normalised))
)
norm_mass_fdiv_loco_analysis_plot
norm_mass_fdiv_analysis_geo_plot = geom_map(geom_sf(data = analysis_data, aes(color = mass_fdiv_normalised, geometry = geometry)), 'Mass FDiv')
norm_mass_fdiv_analysis_geo_plot
ggsave(filename(FIGURES_OUTPUT_DIR, 'normalised_mass_using_abundance.jpg'), width = 2500, height = 1100, units = 'px')
norm_mass_fdiv_analysis_data = model_data(analysis_data[!is.na(analysis_data$mass_fdiv_normalised),], 'mass_fdiv_normalised')
norm_mass_fdiv_analysis_predictors = norm_mass_fdiv_analysis_data[,-1]
norm_mass_fdiv_analysis_interp = VSURF(x = norm_mass_fdiv_analysis_predictors, y = norm_mass_fdiv_analysis_data$mass_fdiv_normalised)
Thresholding step
Estimated computational time (on one core): 15.8 sec.
Interpretation step (on 30 variables)
Estimated computational time (on one core): between 1.5 sec. and 32.4 sec.
Prediction step (on 5 variables)
Maximum estimated computational time (on one core): 0.9 sec.
names(norm_mass_fdiv_analysis_predictors[,norm_mass_fdiv_analysis_interp$varselect.interp])
[1] "core_realm" "abs_latitude" "city_avg_temp" "latitude"
[5] "city_avg_max_monthly_rainfall"
norm_mass_fdiv_analysis_formula = formula_from_vsurp(norm_mass_fdiv_analysis_predictors, "mass_fdiv_normalised", norm_mass_fdiv_analysis_interp)
norm_mass_fdiv_analysis_result <- model_average(norm_mass_fdiv_analysis_formula, norm_mass_fdiv_analysis_data)
Fixed term is "(Intercept)"
norm_mass_fdiv_analysis_result_table = model_summary(norm_mass_fdiv_analysis_result)
norm_mass_fdiv_analysis_result_table
norm_mass_fdiv_analysis_pred_plot = plot_vsurp_result(norm_mass_fdiv_analysis_result_table)
norm_mass_fdiv_analysis_pred_plot
pred_legend <- get_legend(
# create some space to the left of the legend
norm_hwi_fdiv_analysis_pred_plot + theme(legend.box.margin = margin(0, 0, 0, 12))
)
Warning: Removed 1 row containing missing values (`geom_line()`).
`geom_line()`: Each group consists of only one observation.
ℹ Do you need to adjust the group aesthetic?
Warning: Removed 1 rows containing missing values (`geom_point()`).
geo_legend <- get_legend(
# create some space to the left of the legend
norm_mass_fdiv_analysis_geo_plot + theme(legend.box.margin = margin(0, 0, 0, 12))
)
plot_grid(
plot_grid(
norm_mntd_analysis_geo_plot + theme(legend.position="none"),
norm_mntd_analysis_pred_plot + theme(legend.position="none"),
nrow = 1
) + draw_label("MNTD", size = 16, angle = 90, x = 0.01, y = 0.5),
plot_grid(
norm_gape_fdiv_analysis_geo_plot + theme(legend.position="none"),
norm_gape_fdiv_analysis_pred_plot + theme(legend.position="none"),
nrow = 1
) + draw_label("Gape", size = 16, angle = 90, x = 0.01, y = 0.5),
plot_grid(
norm_hwi_fdiv_analysis_geo_plot + theme(legend.position="none"),
norm_hwi_fdiv_analysis_pred_plot + theme(legend.position="none"),
nrow = 1
) + draw_label("HWI", size = 16, angle = 90, x = 0.01, y = 0.5),
plot_grid(
norm_mass_fdiv_analysis_geo_plot + theme(legend.position="none"),
norm_mass_fdiv_analysis_pred_plot + theme(legend.position="none"),
nrow = 1
) + draw_label("Mass", size = 16, angle = 90, x = 0.01, y = 0.5),
plot_grid(
geo_legend,
pred_legend,
nrow = 1
),
nrow = 5
)
`geom_line()`: Each group consists of only one observation.
ℹ Do you need to adjust the group aesthetic?
`geom_line()`: Each group consists of only one observation.
ℹ Do you need to adjust the group aesthetic?
Warning: Removed 1 row containing missing values (`geom_line()`).
`geom_line()`: Each group consists of only one observation.
ℹ Do you need to adjust the group aesthetic?
Warning: Removed 1 rows containing missing values (`geom_point()`).
Warning: Removed 1 row containing missing values (`geom_line()`).
`geom_line()`: Each group consists of only one observation.
ℹ Do you need to adjust the group aesthetic?
Warning: Removed 1 rows containing missing values (`geom_point()`).
ggsave(filename(FIGURES_OUTPUT_DIR, 'process_response.jpg'), width = 2500, height = 5000, units = 'px')
ggplot(analysis_data, aes(x = gape_width_fdiv_normalised, y = mntd_normalised, colour = core_realm)) +
geom_point() +
ylab("MNTD") +
xlab("Gape Width FDiv") +
theme_bw() + labs(color = "Realm")
ggplot(analysis_data, aes(x = handwing_index_fdiv_normalised, y = mntd_normalised, colour = core_realm)) +
geom_point() +
ylab("MNTD") +
xlab("HWI FDiv") +
theme_bw() + labs(color = "Realm")
ggplot(analysis_data, aes(x = handwing_index_fdiv_normalised, y = gape_width_fdiv_normalised, colour = core_realm)) +
geom_point() +
ylab("Gape Width FDiv") +
xlab("HWI FDiv") +
theme_bw() + labs(color = "Realm")
mntd_fdiv_analysis = analysis_data %>%
dplyr::select(city_id, mntd_normalised, handwing_index_fdiv_normalised, gape_width_fdiv_normalised) %>%
left_join(community_summary) %>%
mutate(urban_pool_perc = urban_pool_size * 100 / regional_pool_size)
Joining with `by = join_by(city_id)`
mntd_fdiv_analysis
ggpairs(mntd_fdiv_analysis %>% dplyr::select(mntd_normalised, handwing_index_fdiv_normalised, gape_width_fdiv_normalised, regional_pool_size, urban_pool_size, urban_pool_perc))
ggsave(filename(FIGURES_OUTPUT_DIR, 'appendix_normalised_correlation.jpg'))
Saving 7.29 x 4.51 in image